Re:Not local inference (Score 1)
I was basing my comment about local model support on this:
https://github.com/moltbot/moltbot/issues/2838
Clawdbot currently supports model providers, but configuring local inference engines such as vLLM and Ollama is not straightforward or fully documented. Users running local LLMs (GPU / on-prem / WSL) face friction when attempting to integrate these providers reliably.
Adding official support for vLLM and Ollama as first-class providers would significantly improve local deployment, performance, and developer experience.
So it sounds like it's within the realm of possibility, but something that is neither documented nor straightforward is going to be beyond the reach of most normal users.
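For what it's worth, both Ollama and vLLM expose OpenAI-compatible HTTP endpoints, which is presumably why "first-class provider" support is even on the table. A rough sketch of what talking to a local model generally looks like (this is a generic client example, not Clawdbot's actual config; the base URL and model name are just the usual Ollama defaults and whatever you happen to have pulled locally):

    # Minimal sketch: point an OpenAI-compatible client at a local server.
    # Assumes `ollama serve` is running on its default port; vLLM's
    # OpenAI-compatible server typically listens on :8000 instead.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's default endpoint
        api_key="ollama",  # local servers usually ignore the key, but the client requires one
    )

    resp = client.chat.completions.create(
        model="llama3",  # whatever model you've pulled locally
        messages=[{"role": "user", "content": "Say hello from a local model."}],
    )
    print(resp.choices[0].message.content)

The plumbing itself is pretty simple; the friction the issue describes is presumably in getting a particular app to accept a custom base URL and model list reliably, not in the protocol.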